{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "7QCqLJkVc2Fh"
},
"source": [
"##### Copyright 2024 Google LLC."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "bkysk5wedUe5"
},
"outputs": [],
"source": [
"# @title Licensed under the Apache License, Version 2.0 (the \"License\");\n",
"# you may not use this file except in compliance with the License.\n",
"# You may obtain a copy of the License at\n",
"#\n",
"# https://www.apache.org/licenses/LICENSE-2.0\n",
"#\n",
"# Unless required by applicable law or agreed to in writing, software\n",
"# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
"# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
"# See the License for the specific language governing permissions and\n",
"# limitations under the License."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "F6DdwaTLdj6f"
},
"source": [
"# Getting Started with Gemma 2 and Local Gemma\n",
"\n",
"[Gemma](https://ai.google.dev/gemma) is a family of lightweight, state-of-the-art open-source language models from Google. Built from the same research and technology used to create the Gemini models, Gemma models are text-to-text, decoder-only large language models (LLMs), available in English, with open weights, pre-trained variants, and instruction-tuned variants.\n",
"Gemma models are well-suited for various text-generation tasks, including question-answering, summarization, and reasoning. Their relatively small size makes it possible to deploy them in environments with limited resources such as a laptop, desktop, or your cloud infrastructure, democratizing access to state-of-the-art AI models and helping foster innovation for everyone.\n",
"\n",
"[Local Gemma](https://github.com/huggingface/local-gemma) provides an easy and fast way to run Gemma-2 locally from your CLI or via a Python library. The Huggingface's [Transformers](https://github.com/huggingface/transformers) and [bitsandbytes](https://huggingface.co/docs/bitsandbytes/index) libraries serve as the foundation for it.\n",
"\n",
"In this notebook, you will learn how to run Gemma 2 models using **Local Gemma** in a Google Colab environment. You'll install the necessary packages, set up the model, and run a sample prompt.\n",
"\n",
"<table align=\"left\">\n",
" <td>\n",
" <a target=\"_blank\" href=\"https://colab.research.google.com/github/google-gemini/gemma-cookbook/blob/main/Gemma/[Gemma_2]Using_with_LocalGemma.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n",
" </td>\n",
"</table>"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "aKZoU1elgfSF"
},
"source": [
"## Setup\n",
"\n",
"### Select the Colab runtime\n",
"To complete this tutorial, you'll need to have a Colab runtime with sufficient resources to run the Gemma model. In this case, you can use a T4 GPU:\n",
"\n",
"1. In the upper-right of the Colab window, select **▾ (Additional connection options)**.\n",
"2. Select **Change runtime type**.\n",
"3. Under **Hardware accelerator**, select **T4 GPU**.\n",
"\n",
"### Gemma setup\n",
"\n",
"**Before you dive into the tutorial, let's get you set up with Gemma:**\n",
"\n",
"1. **Hugging Face Account:** If you don't already have one, you can create a free Hugging Face account by clicking [here](https://huggingface.co/join).\n",
"2. **Gemma Model Access:** Head over to the [Gemma model page](https://huggingface.co/collections/google/gemma-2-release-667d6600fd5220e7b967f315) and accept the usage conditions.\n",
"3. **Colab with Gemma Power:** For this tutorial, you'll need a Colab runtime with enough resources to handle the Gemma 2B model. Choose an appropriate runtime when starting your Colab session.\n",
"4. **Hugging Face Token:** Generate a Hugging Face access (preferably `write` permission) token by clicking [here](https://huggingface.co/settings/tokens). You'll need this token later in the tutorial.\n",
"\n",
"**Once you've completed these steps, you're ready to move on to the next section where you'll set up environment variables in your Colab environment.**\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "ceBbRmolghKS"
},
"source": [
"### Configure your HF token\n",
"\n",
"Add your Hugging Face token to the Colab Secrets manager to securely store it.\n",
"\n",
"1. Open your Google Colab notebook and click on the 🔑 Secrets tab in the left panel. <img src=\"https://storage.googleapis.com/generativeai-downloads/images/secrets.jpg\" alt=\"The Secrets tab is found on the left panel.\" width=50%>\n",
"2. Create a new secret with the name `HF_TOKEN`.\n",
"3. Copy/paste your token key into the Value input box of `HF_TOKEN`.\n",
"4. Toggle the button on the left to allow notebook access to the secret."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "aa_KvIwqgkM4"
},
"outputs": [],
"source": [
"import os\n",
"from google.colab import userdata\n",
"\n",
"# Note: `userdata.get` is a Colab API. If you're not using Colab, set the env\n",
"# vars as appropriate for your system.\n",
"os.environ[\"HF_TOKEN\"] = userdata.get(\"HF_TOKEN\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "_uaNoIIovh0P"
},
"source": [
"### Install dependencies\n",
"\n",
"You'll need to install a few Python packages and dependencies for Local Gemma and interact with HuggingFace.\n",
"\n",
"Run the following cell to install or upgrade it:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "luCD2jFZvik8"
},
"outputs": [],
"source": [
"# The huggingface_hub library allows us to download models and other files from Hugging Face.\n",
"!pip install --upgrade -q huggingface_hub\n",
"\n",
"# Install local-gemma \"CUDA\" version\n",
"!pip install -q local-gemma\"[cuda]\""
]
},
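{
"cell_type": "markdown",
"metadata": {
"id": "hfTokenCheckMD"
},
"source": [
"Optionally, you can confirm that your Hugging Face token is being picked up before downloading any models. This is a minimal sanity check (an addition to the Local Gemma workflow, not required by it) using `huggingface_hub.whoami`, which reads the `HF_TOKEN` environment variable you set earlier:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "hfTokenCheckPY"
},
"outputs": [],
"source": [
"from huggingface_hub import whoami\n",
"\n",
"# Raises an error if the token is missing or invalid; otherwise\n",
"# prints the name of the account the token belongs to.\n",
"print(whoami()[\"name\"])"
]
},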
{
"cell_type": "markdown",
"metadata": {
"id": "ver08PM1P1AB"
},
"source": [
"## Run Local Gemma directly\n",
"\n",
"If you want to chat with the Gemma-2 through an interactive session, run the `local-gemma` command directly. If you are running this command for the first time, it will download the model to the cache.\n",
"\n",
"`local-gemma` command comes with a set of options. You will use `--model` and `--silent` here.\n",
"\n",
"| Option | Details |\n",
"|--------|---------|\n",
"| `--model`| Size of Gemma 2 instruct model to be used in the application ('2b', '9b', or '27b') or, a Hugging Face repo. Defaults to '9b'.|\n",
"|`--silent`| Does NOT print any output except for the model outputs.|\n",
"\n",
"**Note:**\n",
"Input `!exit` to leave the program and input `!new session` to reset the conversation."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "vDbcZJTSO3w9"
},
"source": [
"Use `local-gemma` to run the 2b version of Gemma 2 in command line."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "Nw10uOtHQhqV"
},
"outputs": [],
"source": [
"!local-gemma --silent --model 2b"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "aHNBAV5QOB37"
},
"source": [
"To know about all the available options for `local-gemma`, run the following command."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "EICej-FROMxg"
},
"outputs": [],
"source": [
"!local-gemma -h"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "GvNTOw_vQnCA"
},
"source": [
"Alternatively, you can pass a prompt to generate a single response from the model.\n",
"\n",
"`!local-gemma <prompt>`"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "FcbqapZOzXif"
},
"outputs": [],
"source": [
"!local-gemma List top 5 popular programming languages"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "17G2kX_1SVuh"
},
"source": [
"## Running the Gemma 2 model using the `local_gemma` library\n",
"\n",
"You can use **Local Gemma** in your Python code using the `local_gemma` library.\n",
"\n",
"You will initialize the `LocalGemma2ForCausalLM` from the `local_gemma` library by loading a pre-trained Gemma 2 model from HuggingFace. Here's what each argument of `from_pretrained` does:\n",
"\n",
"- `pretrained_model_name_or_path`: Path to the model.\n",
"- `preset`: Sets the optimization strategy for the local model deployment. Defaults to 'auto', which selects the best strategy for your device.\n",
"\n",
"For this example, you will use the 9b version of Gemma 2."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "DuKCJqPpLkce"
},
"outputs": [],
"source": [
"from local_gemma import LocalGemma2ForCausalLM\n",
"from transformers import AutoTokenizer\n",
"\n",
"model = LocalGemma2ForCausalLM.from_pretrained(\n",
" \"google/gemma-2-2b\", preset=\"memory\")"
]
},
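{
"cell_type": "markdown",
"metadata": {
"id": "presetInspectMD"
},
"source": [
"Before generating, you can check which device and data type the 'memory' preset selected. This short check (an addition to the original walkthrough) relies only on the standard `device` and `dtype` properties that Transformers models expose:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "presetInspectPY"
},
"outputs": [],
"source": [
"# Inspect the placement and precision chosen by the preset.\n",
"print(\"Device:\", model.device)\n",
"print(\"Dtype:\", model.dtype)"
]
},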
{
"cell_type": "markdown",
"metadata": {
"id": "OpCbI_fAQbsD"
},
"source": [
"### Interacting with the Gemma model\n",
"Finally, you can prompt the Gemma 2 model to generate a response.\n",
"\n",
"To do this, you will first tokenize the input prompt by initializing the tokenizer for the selected model(`google/gemma-2-9b`) and passing the prompt as its argument. To learn more about tokenizer and its arguments, read [HuggingFace's Tokenizer documentation](https://huggingface.co/docs/transformers/en/main_classes/tokenizer).\n",
"\n",
"You can then query the model using the tokenized prompt. The response generated by the model is tokenized. It must be decoded into sentences."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "qzdHfI13QS_L"
},
"outputs": [],
"source": [
"tokenizer = AutoTokenizer.from_pretrained(\"google/gemma-2-2b\")\n",
"\n",
"model_inputs = tokenizer(\"What is your favourite condiment?\",\n",
" return_attention_mask=True,\n",
" return_tensors=\"pt\")\n",
"generated_ids = model.generate(**model_inputs.to(model.device), max_new_tokens=30)\n",
"\n",
"decoded_text = tokenizer.batch_decode(generated_ids)\n",
"decoded_text"
]
},
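{
"cell_type": "markdown",
"metadata": {
"id": "chatTemplateMD"
},
"source": [
"Note that `google/gemma-2-2b` is the base (pre-trained) variant, so it continues text rather than answering conversationally. If you want chat-style answers, one option, sketched below under the assumption that you also download the instruction-tuned variant `google/gemma-2-2b-it` (not used elsewhere in this notebook), is to format the prompt with the tokenizer's chat template:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "chatTemplatePY"
},
"outputs": [],
"source": [
"# A minimal sketch, assuming the instruction-tuned variant\n",
"# `google/gemma-2-2b-it` (this downloads a second model).\n",
"chat_model = LocalGemma2ForCausalLM.from_pretrained(\n",
"    \"google/gemma-2-2b-it\", preset=\"memory\")\n",
"chat_tokenizer = AutoTokenizer.from_pretrained(\"google/gemma-2-2b-it\")\n",
"\n",
"messages = [{\"role\": \"user\", \"content\": \"What is your favourite condiment?\"}]\n",
"\n",
"# Apply the Gemma chat template and append the generation prompt.\n",
"chat_inputs = chat_tokenizer.apply_chat_template(\n",
"    messages, add_generation_prompt=True, return_tensors=\"pt\")\n",
"\n",
"chat_ids = chat_model.generate(\n",
"    chat_inputs.to(chat_model.device), max_new_tokens=50)\n",
"print(chat_tokenizer.decode(chat_ids[0], skip_special_tokens=True))"
]
},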
{
"cell_type": "markdown",
"metadata": {
"id": "ZGisEuosrOe4"
},
"source": [
"To learn more about the Python usage of **Local Gemma**, visit [Local Gemma's GitHub page](https://github.com/huggingface/local-gemma?tab=readme-ov-file#python-usage)."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "pCwGdjKdVEBN"
},
"source": [
"## Conclusion"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "9-htC4GdoPNV"
},
"source": [
"Congratulations! You've successfully set up the Gemma 2 model using **Local Gemma** in a Colab environment. You can now experiment with the model, generate text, and explore its capabilities."
]
}
],
"metadata": {
"accelerator": "GPU",
"colab": {
"name": "[Gemma_2]Using_with_LocalGemma.ipynb",
"toc_visible": true
},
"kernelspec": {
"display_name": "Python 3",
"name": "python3"
}
},
"nbformat": 4,
"nbformat_minor": 0
}